Section: New Results

Advances in Methodological Tools

Participants : Konstantin Avrachenkov, Alain Jean-Marie, Philippe Nain.

Perturbation analysis

In [21] K. Avrachenkov and J.-B. Lasserre (LAAS-CNRS) investigate the analytic perturbation of generalized inverses. First, the authors analyze the analytic perturbation of the Drazin generalized inverse (also known as the reduced resolvent in operator theory). The approach is based on the spectral theory of linear operators as well as on a new notion of group reduced resolvent. It allows one to treat regular and singular perturbations in a unified framework. The authors provide an algorithm for computing the coefficients of the Laurent series of the perturbed Drazin generalized inverse. In particular, the coefficients of the regular part can be computed efficiently by recursive formulae. Finally, the authors apply the obtained results to the perturbation analysis of the Moore-Penrose generalized inverse in the real domain.
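
The Drazin inverse at the center of this analysis can be computed numerically. The sketch below is an illustration, not the authors' algorithm: it uses the classical identity A^D = A^l (A^(2l+1))^+ A^l, valid for any l at least the index of A, taking l = n so no index computation is needed.

```python
import numpy as np

def drazin_inverse(A):
    # Drazin inverse via the classical formula A^D = A^l (A^(2l+1))^+ A^l,
    # valid for any l >= index(A); l = n (matrix dimension) always suffices.
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    core = np.linalg.matrix_power(A, 2 * n + 1)
    return Al @ np.linalg.pinv(core) @ Al

# Sanity checks of the defining properties on small examples:
A = np.array([[1.0, 1.0], [0.0, 0.0]])      # idempotent, index 1: A^D = A
D = drazin_inverse(A)
assert np.allclose(A @ D, D @ A)            # commutation
assert np.allclose(D @ A @ D, D)            # outer inverse
assert np.allclose(A @ A @ D, A)            # A^(k+1) A^D = A^k with k = 1

N = np.array([[0.0, 1.0], [0.0, 0.0]])      # nilpotent: Drazin inverse is 0
assert np.allclose(drazin_inverse(N), 0.0)
```

For an invertible matrix the Drazin inverse coincides with the ordinary inverse, which gives another easy check of the routine.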

Markov processes

In [20] K. Avrachenkov, L. Cottatellucci (EURECOM), L. Maggi (CREATE-NET, Italy) and Y.-H. Mao (Beijing Normal Univ., China) consider both discrete-time irreducible Markov chains with circulant transition probability matrix P and continuous-time irreducible Markov processes with circulant transition rate matrix Q. In both cases they provide an expression for all the moments of the entropy mixing time. In the discrete case, they prove that all the moments of the mixing time associated with the transition probability matrix αP+(1-α)P*, where P* is the transition probability matrix of the time-reversed chain, are maximized over the interval 0 ≤ α ≤ 1 at α=1/2. Similarly, in the continuous case, they show that all the moments of the mixing time associated with the transition rate matrix αQ+(1-α)Q*, where Q* is the time-reversed transition rate matrix, are likewise maximized over 0 ≤ α ≤ 1 at α=1/2.
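
A quick numerical check of the setting (not of the mixing-time result itself): a circulant transition matrix is doubly stochastic, so the uniform distribution is stationary, the time reversal is P* = Pᵀ, every convex combination αP+(1-α)P* keeps the same stationary distribution, and α=1/2 yields the reversible (symmetric) chain. A sketch with an arbitrary illustrative first row:

```python
import numpy as np

def circulant(first_row):
    # Circulant matrix: row k is the first row cyclically shifted right by k.
    return np.array([np.roll(first_row, k) for k in range(len(first_row))])

row = np.array([0.0, 0.5, 0.3, 0.2])        # hypothetical transition probabilities
P = circulant(row)
n = len(row)
pi = np.full(n, 1.0 / n)                    # uniform: P is doubly stochastic
Pstar = P.T                                 # time reversal w.r.t. the uniform law

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    Pa = alpha * P + (1 - alpha) * Pstar
    assert np.allclose(Pa.sum(axis=1), 1.0)  # still stochastic
    assert np.allclose(pi @ Pa, pi)          # same stationary distribution

half = 0.5 * P + 0.5 * Pstar
assert np.allclose(half, half.T)             # alpha = 1/2: reversible chain
```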

In [23] K. Avrachenkov, in collaboration with A. Piunovskiy and Z. Yi (both from Univ. of Liverpool, UK), studies a general homogeneous continuous-time Markov process with restarts. The process is forced to restart from a given distribution at time moments generated by an independent Poisson process. The motivation to study such processes comes from modeling human and animal mobility patterns, restart processes in communication protocols, and from applications of restarting random walks in information retrieval. The authors provide a connection between the transition probability functions of the original Markov process and those of the modified process with restarts. Closed-form expressions for the invariant probability measure of the modified process are derived. When the process evolves on a Euclidean space, there are also closed-form expressions for the moments of the modified process. The authors show that the modified process is always positive Harris recurrent and exponentially ergodic, with index equal to (or greater than) the rate of restarts. Finally, the general results are illustrated by the standard and geometric Brownian motions.
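
The flavor of the closed-form invariant measure can be seen in a discrete-time analogue (an illustrative simplification, not the paper's continuous-time setting): if at each step the chain restarts from a distribution ν with probability c and otherwise follows P, the invariant measure is c·ν(I-(1-c)P)⁻¹, a PageRank-style formula.

```python
import numpy as np

np.random.seed(0)
n, c = 5, 0.15                               # c: restart probability per step
P = np.random.rand(n, n)
P /= P.sum(axis=1, keepdims=True)            # arbitrary stochastic matrix
nu = np.ones(n) / n                          # restart distribution

# Modified kernel: follow P w.p. (1-c), restart from nu w.p. c.
P_tilde = (1 - c) * P + c * np.outer(np.ones(n), nu)

# Closed-form invariant measure of the restarted chain.
pi = c * nu @ np.linalg.inv(np.eye(n) - (1 - c) * P)

assert np.isclose(pi.sum(), 1.0)             # it is a probability vector
assert np.allclose(pi @ P_tilde, pi)         # and it is stationary
```

The matrix inverse exists because (1-c)P has spectral radius 1-c < 1, which mirrors the geometric ergodicity driven by the restart rate in the continuous-time result.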

Queueing theory

In [22] K. Avrachenkov, P. Nain and U. Yechiali (Tel Aviv Univ., Israel) consider two independent Poisson streams of jobs flowing into a single-server service system having a limited common buffer that can hold at most one job. If a type-i job (i=1,2) finds the server busy, it is blocked and routed to a separate type-i retrial (orbit) queue that attempts to re-dispatch its jobs at its specific Poisson rate. This creates a system with three dependent queues. Such a queueing system serves as a model for two competing job streams in a carrier sensing multiple access system. The authors study the queueing system using multi-dimensional probability generating functions, and derive its necessary and sufficient stability conditions by solving a Riemann-Hilbert boundary value problem. Various performance measures are calculated and numerical results are presented. In particular, the numerical results demonstrate that the proposed multiple access system with two types of jobs and constant retrial rates provides incentives for the users to respect their contracts.
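
A discrete-event simulation sketch of this model can complement the analytical treatment. The parameter names and the mean-orbit-size estimator below are illustrative assumptions, not taken from the paper: two Poisson arrival streams, one exponential server with no waiting room, and one constant-rate retrial orbit per job type (a retrial attempt finding the server busy fails and the job stays in orbit).

```python
import random

def simulate(lam1, lam2, mu, th1, th2, T, seed=1):
    # lam_i: arrival rates; mu: service rate; th_i: constant retrial rates.
    rng = random.Random(seed)
    t, busy, orbit = 0.0, False, [0, 0]
    area = [0.0, 0.0]                        # time-integrals of orbit sizes
    while t < T:
        rates = [lam1, lam2,
                 mu if busy else 0.0,
                 th1 if orbit[0] > 0 else 0.0,
                 th2 if orbit[1] > 0 else 0.0]
        total = sum(rates)
        dt = rng.expovariate(total)          # time to next event
        area[0] += orbit[0] * dt
        area[1] += orbit[1] * dt
        t += dt
        u = rng.random() * total             # which event fired?
        if u < rates[0]:                     # type-1 arrival
            if busy: orbit[0] += 1           # blocked -> orbit 1
            else: busy = True
        elif u < rates[0] + rates[1]:        # type-2 arrival
            if busy: orbit[1] += 1           # blocked -> orbit 2
            else: busy = True
        elif u < rates[0] + rates[1] + rates[2]:
            busy = False                     # service completion
        elif u < sum(rates[:4]):             # orbit-1 retrial attempt
            if not busy:
                busy = True
                orbit[0] -= 1
        else:                                # orbit-2 retrial attempt
            if not busy:
                busy = True
                orbit[1] -= 1
    return area[0] / t, area[1] / t          # time-average orbit sizes
```

For lightly loaded parameters (e.g. arrival rates well below the service rate) the estimated mean orbit sizes stay small, consistent with a stable system; near the stability boundary they blow up.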

Control theory

In collaboration with E. Della Vecchia and S. Di Marco (both from National Univ. Rosario, Argentina), A. Jean-Marie has continued the study of the Rolling Horizon procedure and other approximations in stochastic control problems. Inspired by the work of A. Ruszczyński, they have considered Markov Decision problems where the metric to be optimized is a risk measure, which generalizes the mathematical expectation and takes the risk aversion of agents into account. For infinite-horizon, risk-averse discounted Markov Decision Processes, they have proved approximation bounds which imply the convergence of approximate rolling horizon procedures as the horizon length tends to infinity. They have also analyzed the effects of uncertainties on the transition probabilities, the cost functions and the discount factors [77].
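
To illustrate the kind of object being approximated, here is a sketch under simplifying assumptions: risk-averse backward induction in which the expectation of the Bellman backup is replaced by the mean-semideviation risk measure from Ruszczyński's framework (chosen for concreteness; the paper's exact setting may differ). A rolling-horizon controller would solve this for a fixed horizon H, apply the first minimizing action, and repeat at the next state.

```python
import numpy as np

def risk(values, probs, kappa=0.5):
    # Mean-semideviation risk measure: E[X] + kappa * E[(X - E[X])_+].
    # Coherent and monotone for 0 <= kappa <= 1; kappa = 0 recovers E[X].
    m = probs @ values
    return m + kappa * (probs @ np.maximum(values - m, 0.0))

def finite_horizon_value(P, C, beta, H, kappa=0.5):
    # Risk-averse backward induction over horizon H.
    # P[a][s, s'] transition probabilities, C[a][s] immediate costs.
    nA, nS = len(P), P[0].shape[0]
    v = np.zeros(nS)
    for _ in range(H):
        q = np.empty((nA, nS))
        for a in range(nA):
            for s in range(nS):
                q[a, s] = C[a][s] + beta * risk(v, P[a][s], kappa)
        v = q.min(axis=0)                   # greedy over actions
    return v
```

Because the risk measure is monotone and translation-equivariant, the risk-averse Bellman operator remains a β-contraction, which is what makes horizon-truncation (and hence rolling horizon) error bounds of the β^H type possible.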

In [17] K. Avrachenkov, U. Ayesta (LAAS-CNRS), J. Doncel (LAAS-CNRS) and P. Jacko (BCAM, Spain) address the problem of fast and fair transmission of flows in a router, which is a fundamental issue in networks like the Internet. They focus on the relaxed version of the problem obtained by relaxing the fixed buffer capacity constraint that must be satisfied at all time epochs. The relaxation allows one to reduce the multi-flow problem to a family of single-flow problems, for which one can analyze, both theoretically and numerically, the existence of optimal control policies of special structure. In particular, it is shown that the control can be represented by so-called index policies, but not always by threshold policies. The simulation and numerical results show that the index policy achieves a wide range of desirable properties with respect to fairness between different TCP versions, across users with different round-trip times, and with respect to the minimum buffer size required to achieve full utilization of the queue.

Game theory

In [18] K. Avrachenkov, L. Cottatellucci (EURECOM) and L. Maggi (CREATE-NET, Italy) consider simple Markovian games, in which several states succeed each other over time, following an exogenous discrete-time Markov chain. In each state, a different simple static game is played by the same set of players. The authors investigate the approximation of the Shapley-Shubik power index in simple Markovian games (SSM). They prove that an exponential number of queries on coalition values is necessary for any deterministic algorithm even to approximate SSM with polynomial accuracy. Motivated by this, the authors propose and study three randomized approaches to compute a confidence interval for SSM. These rest upon two different assumptions, static and dynamic, about the process through which the estimator agent learns the coalition values. Such approaches can also be utilized to compute confidence intervals for the Shapley value in any Markovian game. The proposed methods require a number of queries that is polynomial in the number of players to achieve polynomial accuracy.
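
The randomized approaches are in the spirit of Monte Carlo estimation of the static Shapley-Shubik index: sample random player orderings, record how often the player of interest is pivotal (the first player whose arrival makes the coalition winning), and build a confidence interval from the hit frequency. A minimal sketch of that idea; the weighted-majority game, parameters, and normal-approximation interval here are illustrative, not the paper's estimators.

```python
import random, math

def shapley_shubik_ci(n, is_winning, player, m=2000, z=1.96, seed=0):
    # Monte Carlo estimate of `player`'s Shapley-Shubik power index
    # with a normal-approximation 95% confidence interval (z = 1.96).
    rng = random.Random(seed)
    players = list(range(n))
    hits = 0
    for _ in range(m):
        rng.shuffle(players)                 # uniform random ordering
        coalition = set()
        for p in players:
            coalition.add(p)
            if is_winning(coalition):        # p turned the coalition winning
                hits += (p == player)        # p is the pivotal player
                break
    phat = hits / m
    half = z * math.sqrt(phat * (1 - phat) / m)
    return phat, (phat - half, phat + half)

# Hypothetical weighted-majority game: player 0 alone meets the quota,
# so it is a dictator (index 1) and players 1, 2 are dummies (index 0).
weights, quota = [4, 2, 1], 4
win = lambda S: sum(weights[i] for i in S) >= quota
```

With one query to `is_winning` per sampled player and m samples, the query count is polynomial in the number of players, which is the regime the randomized methods of the paper operate in.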

In [19] K. Avrachenkov, L. Cottatellucci (EURECOM) and L. Maggi (CREATE-NET, Italy) study multi-agent Markov decision processes (MDPs) in which cooperation among players is allowed. They find a cooperative payoff distribution procedure (MDP-CPDP) that distributes, over the course of the game, the payoff that players would earn in the long-run game. They show under which conditions such an MDP-CPDP fulfills a time-consistency property, satisfies greedy players, and strengthens coalition cohesiveness throughout the game. Finally, the authors refine the concept of the Core for cooperative MDPs.